Analog VLSI Implementation of Gradient Descent

Authors

  • David B. Kirk
  • Douglas Kerns
  • Kurt W. Fleischer
  • Alan H. Barr
Abstract

We describe an analog VLSI implementation of a multi-dimensional gradient estimation and descent technique for minimizing an on-chip scalar function f(). The implementation uses noise injection and multiplicative correlation to estimate derivatives, as in [Anderson, Kerns 92]. One intended application of this technique is setting circuit parameters on-chip automatically, rather than manually [Kirk 91]. Gradient descent optimization may be used to adjust synapse weights for a back-propagation or other on-chip learning implementation. The approach combines the features of continuous multi-dimensional gradient descent with the potential for an annealing style of optimization. We present data measured from our analog VLSI implementation.
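The noise-injection idea above can be sketched in software: perturb all parameters with independent zero-mean noise, measure the resulting change in the scalar function, and multiply (correlate) that change with each noise sample to get an estimate of the corresponding partial derivative. This is a minimal illustrative sketch of the principle, not the authors' analog circuit; the function, constants, and names are assumptions for the example.

```python
import random

def perturbative_gradient_step(f, params, sigma=0.01, lr=0.05):
    """One descent step that estimates the gradient by noise injection
    and multiplicative correlation (software sketch of the principle)."""
    # Inject independent zero-mean noise into every parameter.
    noise = [random.gauss(0.0, 1.0) for _ in params]
    up = f([p + sigma * n for p, n in zip(params, noise)])
    down = f([p - sigma * n for p, n in zip(params, noise)])
    delta = (up - down) / 2.0
    # Correlating (multiplying) the scalar change in f with each noise
    # component yields an unbiased estimate of that partial derivative.
    grad_est = [delta * n / sigma for n in noise]
    return [p - lr * g for p, g in zip(params, grad_est)]

# Minimize a simple quadratic bowl from a distant starting point.
random.seed(0)
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
params = [4.0, 3.0]
for _ in range(2000):
    params = perturbative_gradient_step(f, params)
```

Note that only two evaluations of f are needed per step regardless of dimensionality, which is what makes the technique attractive for parallel analog hardware.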


Similar Articles

Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks

Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation...


Image Sharpness and Beam Focus VLSI Sensors for Adaptive Optics

High-resolution wavefront control for adaptive optics requires accurate sensing of a measure of optical quality. We present two analog very-large-scale-integration (VLSI) image-plane sensors that supply real-time metrics of image and beam quality, for applications in imaging and line-of-sight laser communication. The image metric VLSI sensor quantifies sharpness of the received image in terms o...


A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks

Typical methods for gradient descent in neural network learning involve calculation of derivatives based on a detailed knowledge of the network model. This requires extensive, time-consuming calculations for each pattern presentation and high precision that makes it difficult to implement in VLSI. We present here a perturbation technique that measures, not calculates, the gradient. Since the te...


A Study of Parallel Perturbative Gradient Descent

We have continued our study of a parallel perturbative learning method [Alspector et al., 1993] and implications for its implementation in analog VLSI. Our new results indicate that, in most cases, a single parallel perturbation (per pattern presentation) of the function parameters (weights in a neural network) is theoretically the best course. This is not true, however, for certain problems an...


Analog VLSI for robot path planning

Analog VLSI provides a convenient and high-performance engine for robot path planning. Laplace's equation is a useful formulation of the path planning problem; however, digital solutions are very expensive. Since high precision is not required an analog approach is attractive. A resistive network can be used to model the robot's domain with various boundary conditions for the source, target, an...
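The resistive-network formulation can be illustrated digitally: pin the goal node at potential 0 and obstacles at 1, relax Laplace's equation over the free space, and follow the potential downhill. The harmonic potential has no local minima in free space, so steepest descent always reaches the goal. This is a sketch of the formulation only (an analog resistive grid would settle to this potential directly); the grid, names, and iteration count are illustrative assumptions.

```python
def plan_path(grid, start, goal, iterations=2000):
    """Path planning via Laplace's equation over a 2-D grid.
    grid: list of rows, 1 = obstacle/boundary, 0 = free space."""
    rows, cols = len(grid), len(grid[0])
    phi = [[1.0] * cols for _ in range(rows)]
    phi[goal[0]][goal[1]] = 0.0
    for _ in range(iterations):
        nxt = [row[:] for row in phi]
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                if grid[r][c] == 0 and (r, c) != goal:
                    # Jacobi relaxation: each free node moves toward
                    # the average of its four neighbours.
                    nxt[r][c] = 0.25 * (phi[r - 1][c] + phi[r + 1][c]
                                        + phi[r][c - 1] + phi[r][c + 1])
        phi = nxt
    # Walk downhill on the potential from start to goal.
    path, pos = [start], start
    while pos != goal and len(path) < rows * cols:
        r, c = pos
        pos = min(((nr, nc) for nr, nc in
                   [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                   if grid[nr][nc] == 0),
                  key=lambda n: phi[n[0]][n[1]])
        path.append(pos)
    return path

# A 5x5 world: border walls, one interior obstacle at (2, 2).
world = [[1, 1, 1, 1, 1],
         [1, 0, 0, 0, 1],
         [1, 0, 1, 0, 1],
         [1, 0, 0, 0, 1],
         [1, 1, 1, 1, 1]]
route = plan_path(world, start=(1, 1), goal=(3, 3))
```

The relaxation loop is exactly where the analog approach wins: each resistor node computes the neighbour average continuously and in parallel, so the precision demands on any one node are low.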



Publication date: 1992